
    Perceiving Unknown in Dark from Perspective of Cell Vibration

    Low light often degrades image quality and can even cause visual tasks to fail. Existing image enhancement technologies are prone to over-enhancement or color distortion, and their adaptability is fairly limited. To deal with these problems, we utilise the mechanism of biological cell vibration to interpret the formation of color images. In particular, we propose a simple yet effective cell vibration energy (CVE) mapping method for image enhancement. Based on a hypothetical color-formation mechanism, our method first uses cell vibration and photoreceptor correction to determine the photon flow energy for each color channel, and then reconstructs the color image under the maximum energy constraint of the visual system. Photoreceptor cells can adaptively adjust their feedback to the light intensity of the perceived environment. Based on this understanding, we propose a new Gamma auto-adjustment method that modifies the Gamma value for each individual image. Finally, a fusion method combining CVE and Gamma auto-adjustment (CVE-G) is proposed to reconstruct the color image under a lightness constraint. Experimental results show that the proposed algorithm is superior to six state-of-the-art methods in avoiding over-enhancement and color distortion, restoring the textures of dark areas and reproducing natural colors. The source code will be released at https://github.com/leixiaozhou/CVE-G-Resource-Base. Comment: 13 pages, 17 figures
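    The idea of choosing a Gamma value per image, rather than a fixed one, can be illustrated with a common brightness-based heuristic. The rule below (gamma = log(0.5) / log(mean intensity), which maps the image mean to mid-grey) is a generic sketch and not the paper's cell-vibration-derived formula:

```python
import numpy as np

def auto_gamma(img):
    """Pick a per-image gamma from the mean brightness.

    A generic heuristic: choose gamma so that the mean intensity
    maps to 0.5. The paper derives its Gamma auto-adjustment from
    a cell-vibration model instead; this is only an illustration
    of per-image (rather than fixed) Gamma correction.
    """
    x = img.astype(np.float64) / 255.0
    mean = np.clip(x.mean(), 1e-3, 1.0 - 1e-3)   # avoid log(0)
    gamma = np.log(0.5) / np.log(mean)           # mean ** gamma == 0.5
    return np.clip(x ** gamma, 0.0, 1.0)
```

    For a dark image (mean well below 0.5) this yields gamma < 1, brightening shadows; for an already bright image it yields gamma > 1, preventing over-enhancement.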

    Anisotropic mean shift based fuzzy c-means segmentation of dermoscopy images

    Image segmentation is an important task in analysing dermoscopy images, as the extraction of the borders of skin lesions provides important cues for accurate diagnosis. One family of segmentation algorithms is based on the idea of clustering pixels with similar characteristics. Fuzzy c-means has been shown to work well for clustering-based segmentation; however, due to its iterative nature, this approach has excessive computational requirements. In this paper, we introduce a new mean shift based fuzzy c-means algorithm that requires less computational time than previous techniques while providing good segmentation results. The proposed segmentation method incorporates a mean field term within the standard fuzzy c-means objective function. Since mean shift can quickly and reliably find cluster centers, the entire strategy is capable of effectively detecting regions within an image. Experimental results on a large dataset of diverse dermoscopy images demonstrate that the presented method accurately and efficiently detects the borders of skin lesions.
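    The clustering step being accelerated here is standard fuzzy c-means. The sketch below shows only the textbook membership and center updates; the paper's contribution (seeding with mean shift modes and the extra term in the objective) is not reproduced:

```python
import numpy as np

def fcm(X, centers, m=2.0, n_iter=10):
    """Plain fuzzy c-means updates (illustrative sketch only).

    X: (n, d) pixel features; centers: (c, d) initial cluster
    centers, which the paper would obtain via mean shift.
    m > 1 is the fuzziness exponent.
    """
    for _ in range(n_iter):
        # Distance from every point to every center, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)
        # Center update: weighted mean of the points, weights u^m.
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers
```

    Each iteration touches every pixel-center pair, which is why the abstract calls the iterative scheme computationally expensive; starting from good mean shift centers reduces the number of iterations needed.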

    Preface

    Recent Advances in Electrical & Electronic Engineering publishes review and research articles, guest-edited thematic issues, and reviews of patents on electrical and electronic engineering and its applications. The journal also covers research in fast-emerging applications of electrical power supply, electrical systems, power transmission, electromagnetism, and motor control processes and technologies related to electrical and electronic engineering. In this issue, I am very pleased to introduce some of the recent research advances in electrical systems, power control, circuit design and video processing. Taking this opportunity, I would like to thank all the authors, reviewers and the editorial team of the journal for their professional dedication and commitment.


    Texture Synthesis Quality Assessment Using Perceptual Texture Similarity

    Texture synthesis plays an important role in the computer game and movie industries. Although it has been widely studied, the assessment of the quality of synthesised textures has received little attention. Inspired by the research progress in perceptual texture similarity estimation, we propose a Texture Synthesis Quality Assessment (TSQA) approach. To our knowledge, this is the first attempt to exploit perceptual texture similarity for the TSQA task. In particular, we introduce two perceptual similarity principles for synthesis quality assessment. Correspondingly, we train two Random Forest (RF) regressors. Given a pair of sample and synthesised textures, the two regressors predict the global and local quality scores of the synthesised texture respectively, and an overall score is generated from the two. Our results show that, together with the proposed TSQA approach, deep Bag-of-Words (BoW) descriptors extracted by a pre-trained Convolutional Neural Network (CNN) perform better than, or comparably to, nine other types of hand-crafted or CNN descriptors and an image quality assessment measure.

    Texture Classification Using Pair-wise Difference Pooling Based Bilinear Convolutional Neural Networks

    Texture is normally represented by aggregating local features based on the assumption of spatial homogeneity. Effective texture features remain a research focus even though both hand-crafted and deep learning approaches have been extensively investigated. Motivated by the success of Bilinear Convolutional Neural Networks (BCNNs) in fine-grained image recognition, we propose to incorporate the BCNN with Pair-wise Difference Pooling (i.e. BCNN-PDP) for texture classification. The BCNN-PDP is built on top of a set of feature maps extracted at a convolutional layer of a pre-trained CNN. Compared with the outer product used by the original BCNN feature set, the pair-wise difference not only captures the pair-wise relationship between two sets of features but also encodes the difference between each pair of features. Considering the importance of gradient data to the representation of image structures, we further generalise the BCNN-PDP feature set to two sets of feature maps computed from the original image and its gradient magnitude map respectively, i.e. the Fused BCNN-PDP (F-BCNN-PDP) feature set. In addition, the BCNN-PDP can be applied to two different CNNs and is then referred to as the Asymmetric BCNN-PDP (A-BCNN-PDP). The three PDP-based BCNN feature sets can also be extracted at multiple scales. Since the dimensionality of the BCNN feature vectors is very high, we propose a new yet simple Block-wise PCA (BPCA) method in order to derive more compact feature vectors. The proposed methods are tested on seven different datasets along with 21 baseline feature sets. The results show that the proposed feature sets are superior, or at least comparable, to their counterparts across different datasets.
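    The contrast between the original bilinear (outer product) pooling and a difference-based pooling can be sketched as follows. The exact form of the paper's Pair-wise Difference Pooling may differ; the squared-difference variant below is one plausible reading of "encoding the difference between each pair of features":

```python
import numpy as np

def bilinear_pool(A, B):
    """Original BCNN pooling: outer product of two feature sets,
    averaged over spatial locations. A: (c1, n), B: (c2, n),
    where n = h * w flattened locations. Returns (c1, c2)."""
    return (A @ B.T) / A.shape[1]

def pdp_pool(A, B):
    """Illustrative pair-wise difference pooling: pool the
    element-wise differences between every pair of feature
    channels instead of their products (a sketch, not the
    paper's exact formulation). Returns (c1, c2)."""
    n = A.shape[1]
    # (c1, c2, n): difference between each channel pair per location.
    diff = A[:, None, :] - B[None, :, :]
    # Mean squared difference over locations.
    return (diff ** 2).sum(axis=2) / n
```

    Note that with identical inputs the diagonal of `pdp_pool` is zero, so the pooled matrix reflects how channels differ rather than how they co-activate.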

    A Robust Parallel Object Tracking Method for Illumination Variations

    Illumination variation often occurs in visual tracking and has a severe impact on system performance. Many trackers based on the Discriminative Correlation Filter (DCF) have recently obtained promising performance, showing robustness to illumination variation. However, when target objects undergo significant appearance changes due to intense illumination variation, the features extracted from the object can no longer be discriminated from the background, causing the tracking algorithm to lose the target in the scene. In this paper, in order to improve the accuracy and robustness of DCF trackers under intense illumination variation, we propose a very effective strategy based on multiple region detection and alternate templates (MRAT). Based on parallel computation, we are able to detect multiple regions simultaneously, equivalently enlarging the search region. Meanwhile, alternate templates are saved by a template update mechanism in order to improve the accuracy of the tracker under strong illumination variation. Experimental results on large-scale public benchmark datasets show the effectiveness of the proposed method compared to state-of-the-art methods.

    Perceptual image quality using Dual generative adversarial network

    Generative adversarial networks have achieved remarkable success in many computer vision applications for their ability to learn from complex data distributions. In particular, they are capable of generating realistic images from a latent space with a simple and intuitive structure. The main focus of existing models has been improving performance; however, little attention has been paid to making the model robust. In this paper, we investigate solutions to super-resolution problems, in particular perceptual quality, by proposing a robust GAN. Unlike the standard GAN, the proposed model employs two generators and two discriminators, in which one discriminator determines whether samples come from the real data or the generator, while the other acts as a classifier that returns wrong samples to their corresponding generators. The generators learn a mixture of many distributions, ranging from the prior to the complex data distribution. This new methodology is trained with the feature matching loss and allows us to return wrong samples to the corresponding generators, in order to regenerate realistic-looking samples. Experimental results on various datasets show the superiority of the proposed model compared to state-of-the-art methods.